Negative Bias Temperature Instability And Charge Trapping Effects On Analog And Digital Circuit Reliability
Nanoscale p-channel transistors under negative gate bias at elevated temperature show threshold voltage degradation after a short period of stress time. In addition, nanoscale (45 nm) n-channel transistors using high-k (HfO2) dielectrics to reduce gate leakage power for advanced microprocessors exhibit a fast transient charge trapping effect that leads to threshold voltage instability and mobility reduction. A simulation methodology to quantify circuit-level degradation subject to negative bias temperature instability (NBTI) and the fast transient charge trapping effect has been developed in this thesis work. Different current mirror and two-stage operational amplifier structures are studied to evaluate the impact of NBTI on CMOS analog circuit performance for nanoscale applications. A fundamental digital circuit, an eleven-stage ring oscillator, has also been evaluated to examine the effect of fast transient charge trapping in HfO2 high-k transistors on its propagation delay. The preliminary results show that negative bias temperature instability reduces the bandwidth of CMOS operational amplifiers but increases the amplifier's voltage gain in the mid-frequency range. The transient charge trapping effect increases the propagation delay of the ring oscillator. The evaluation methodology developed in this thesis could be extended to study other CMOS device and circuit reliability issues subject to electrical and temperature stresses.
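The thesis's simulation methodology is not reproduced here, but the kind of degradation it quantifies is often captured at first order by a reaction-diffusion-style power law, ΔVth(t) = A·t^n. The sketch below is purely illustrative; the prefactor `a` and exponent `n` are assumed fitting parameters, not values from the thesis.

```python
# Illustrative power-law model for NBTI threshold-voltage shift.
# dVth(t) = a * t^n; n ~ 1/6 is typical of reaction-diffusion NBTI models.
# The parameter values below are assumptions for demonstration only.

def nbti_delta_vth(t_stress_s, a=2.0e-3, n=0.16):
    """Threshold-voltage shift (V) after t_stress_s seconds of NBTI stress."""
    return a * t_stress_s ** n

# Degradation grows sub-linearly with stress time:
for t in (1.0, 1e3, 1e6):
    print(f"t = {t:>9.0f} s  ->  dVth = {nbti_delta_vth(t) * 1e3:.1f} mV")
```

A circuit-level study would feed such a shifted Vth back into transistor models and re-simulate amplifier bandwidth or ring-oscillator delay.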
THE MARKET REACTION TO STOCK SPLIT ON ACTUAL STOCK SPLIT DAY
It is well documented in the literature that there are positive abnormal returns on the announcement days of stock splits. However, few studies have investigated the stock return on the actual split day. We examine the market reaction on the actual split day and find that it is positive. We also find a negative relationship between the market reaction and firm size, as well as the previous trading volume. The result supports the inattention theory.
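The market reaction measured in such event studies is typically a market-model abnormal return. A minimal sketch, with purely illustrative numbers (the study's actual estimation windows and parameters are not shown here):

```python
# Market-model abnormal return: AR = R_stock - (alpha + beta * R_market),
# where alpha and beta come from a pre-event estimation regression.
# All numeric values below are hypothetical examples.

def abnormal_return(stock_ret, market_ret, alpha, beta):
    """Abnormal return of a stock given its market-model parameters."""
    return stock_ret - (alpha + beta * market_ret)

# e.g., on the actual split day:
ar = abnormal_return(stock_ret=0.012, market_ret=0.004, alpha=0.0001, beta=1.1)
print(f"split-day abnormal return: {ar:.4f}")
```

Averaging these ARs across split events gives the positive split-day reaction the abstract reports.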
An Application on Text Classification Based on Granular Computing
Machine learning is key to text classification. A granular computing approach to machine learning is applied to learning classification rules by considering two basic issues: concept formation and the identification of concept relationships. In this paper, we concentrate on the selection of a single granule in each step to construct a granule network, and propose a classification rule induction method.
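The step of selecting a single granule can be sketched greedily: among all (attribute, value) granules, pick the one whose covered examples are purest with respect to the class label. This is an illustrative re-implementation with a simple purity measure, not necessarily the paper's exact selection criterion; the toy table is hypothetical.

```python
from collections import Counter

def best_granule(rows, label_key="label"):
    """Greedy single-granule selection: return the (attribute, value) pair
    whose covered rows are purest w.r.t. the class label, plus its purity.
    Purity here = fraction of covered rows in the majority class."""
    best, best_purity = None, -1.0
    attrs = [k for k in rows[0] if k != label_key]
    for a in attrs:
        for v in {r[a] for r in rows}:
            covered = [r[label_key] for r in rows if r[a] == v]
            purity = Counter(covered).most_common(1)[0][1] / len(covered)
            if purity > best_purity:
                best, best_purity = (a, v), purity
    return best, best_purity

# Hypothetical training table:
rows = [
    {"outlook": "sunny", "windy": "yes", "label": "no"},
    {"outlook": "sunny", "windy": "no",  "label": "no"},
    {"outlook": "rain",  "windy": "no",  "label": "yes"},
    {"outlook": "rain",  "windy": "yes", "label": "yes"},
]
granule, purity = best_granule(rows)
print(granule, purity)
```

Repeating the selection on the rows not yet covered would grow the granule network, each chosen granule contributing one classification rule.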
Directed Acyclic Graph based Ledger for Internet of Things: Performance and Security Analysis
The Directed Acyclic Graph (DAG)-based ledger and its corresponding consensus algorithm have been identified as a promising technology for the Internet of Things (IoT). Compared with Proof-of-Work (PoW) and Proof-of-Stake (PoS), which have been widely used in blockchains, a consensus mechanism designed on a DAG structure (simply called DAG consensus) can overcome shortcomings such as high resource consumption, high transaction fees, low transaction throughput, and long confirmation delay. However, theoretical analysis of DAG consensus remains an untapped avenue to be explored. To this end, based on one of the most typical DAG consensuses, Tangle, we investigate the impact of network load on the performance and security of the DAG-based ledger. Considering unsteady network load, we first propose a Markov chain model to capture the behavior of the DAG consensus process under dynamic load conditions. The key performance metrics, i.e., cumulative weight and confirmation delay, are analysed based on the proposed model. Then, we leverage a stochastic model to analyse the probability of a successful double-spending attack in different network load regimes. The results provide an insightful understanding of the DAG consensus process, e.g., how the network load affects the confirmation delay and the probability of a successful attack. Meanwhile, we also demonstrate the trade-off between security level and confirmation delay, which can act as guidance for the practical deployment of DAG-based ledgers.

Comment: accepted by IEEE Transactions on Networking
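The cumulative-weight metric the paper analyses can be made concrete with a toy Tangle simulation: each arriving transaction approves two earlier ones, and a transaction's cumulative weight is the number of transactions that directly or indirectly approve it (including itself). This is an illustrative sketch, not the paper's Markov-chain model; all parameters are arbitrary.

```python
import random

def simulate_cumulative_weight(n_tx=300, target=10, seed=1):
    """Toy Tangle: track how the cumulative weight of transaction `target`
    grows as later transactions attach to the DAG. Returns the weight
    after each new arrival (0 until `target` itself is issued)."""
    random.seed(seed)
    approves = {0: set(), 1: set()}   # tx -> directly approved txs
    reaches = {0: {0}, 1: {1}}        # tx -> txs it transitively approves, incl. itself
    weight_history = []
    for tx in range(2, n_tx):
        parents = random.sample(list(approves), 2)  # toy tip selection
        approves[tx] = set(parents)
        reaches[tx] = {tx} | reaches[parents[0]] | reaches[parents[1]]
        # Cumulative weight = how many transactions reach `target`:
        weight_history.append(sum(1 for r in reaches.values() if target in r))
    return weight_history

w = simulate_cumulative_weight()
print("final cumulative weight:", w[-1])
```

Higher arrival rates (network load) would make this weight grow faster, which is exactly the load/confirmation-delay coupling the paper studies analytically.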
CFI2P: Coarse-to-Fine Cross-Modal Correspondence Learning for Image-to-Point Cloud Registration
In the context of image-to-point cloud registration, acquiring point-to-pixel
correspondences presents a challenging task since the similarity between
individual points and pixels is ambiguous due to the visual differences in data
modalities. Nevertheless, the same object present in the two data formats can
be readily identified from the local perspective of point sets and pixel
patches. Motivated by this intuition, we propose a coarse-to-fine framework
that emphasizes the establishment of correspondences between local point sets
and pixel patches, followed by the refinement of results at both the point and
pixel levels. On a coarse scale, we mimic the classic Visual Transformer to
translate both image and point cloud into two sequences of local
representations, namely point and pixel proxies, and employ attention to
capture global and cross-modal contexts. To supervise the coarse matching, we
propose a novel projected point proportion loss, which guides to match point
sets with pixel patches where more points can be projected into. On a finer
scale, point-to-pixel correspondences are then refined from a smaller search
space (i.e., the coarsely matched sets and patches) via well-designed sampling,
attentional learning and fine matching, where sampling masks are embedded in
the last two steps to mitigate the negative effect of sampling. With the
high-quality correspondences, the registration problem is then resolved by EPnP
algorithm within RANSAC. Experimental results on large-scale outdoor benchmarks
demonstrate our superiority over existing methods
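The supervision target behind the projected point proportion loss can be sketched as follows: project a local point set into the image, count how many of its points fall into each pixel patch, and normalize. This is an illustrative re-implementation from the abstract's description, not the authors' code; the grid and patch size are assumed values.

```python
def projected_proportion_target(uv, patch_size=32, grid=(4, 4)):
    """For one local point set, return the fraction of its projected 2D
    points (u, v pixel coordinates) landing in each patch of a
    grid[0] x grid[1] patch layout. Points outside the grid are ignored."""
    rows, cols = grid
    counts = [[0.0] * cols for _ in range(rows)]
    for u, v in uv:
        r, c = int(v // patch_size), int(u // patch_size)
        if 0 <= r < rows and 0 <= c < cols:
            counts[r][c] += 1.0
    total = sum(map(sum, counts))
    if total:
        counts = [[x / total for x in row] for row in counts]
    return counts

# Hypothetical projected coordinates of one point set:
t = projected_proportion_target([(5, 5), (10, 10), (40, 5)])
print(t[0][:2])
```

A coarse-matching head would then be trained so that its patch scores for this point set follow these proportions, favoring the patch that receives the most projected points.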
Exploring the Optimal Choice for Generative Processes in Diffusion Models: Ordinary vs Stochastic Differential Equations
The diffusion model has shown remarkable success in computer vision, but it remains unclear whether ODE-based probability-flow models or SDE-based diffusion models are superior, and under what circumstances. Comparing the two is challenging due to dependencies on the data distribution, score training, and other numerical factors. In this paper, we study the problem mathematically by examining two limiting scenarios: the ODE case and the large-diffusion case. We first introduce a pulse-shape error to perturb the score function and analyze the resulting error accumulation, with a generalization to arbitrary errors. Our findings indicate that when the perturbation occurs at the end of the generative process, the ODE model outperforms the SDE model (with a large diffusion coefficient). However, when the perturbation occurs earlier, the SDE model outperforms the ODE model, and we demonstrate that the sample-generation error due to a pulse-shape error can be exponentially suppressed as the magnitude of the diffusion term increases to infinity. Numerical validation of this phenomenon is provided using toy models such as the Gaussian, Gaussian mixture, and Swiss roll distributions. Finally, we experiment with MNIST and observe that varying the diffusion coefficient can improve sample quality even when the score function is not well trained.
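The ODE-vs-SDE comparison can be made concrete on a Gaussian toy model like those the abstract mentions. The sketch below is illustrative, not the paper's code: for 1-D data ~ N(2, 1) under a variance-preserving (OU) forward process dx = -x/2 dt + dW, the marginal at time t is exactly N(2·e^(-t/2), 1), so the score is known in closed form and both generative processes can be integrated with Euler(-Maruyama) steps. All parameter values are arbitrary choices.

```python
import math
import random

# Toy setup: data ~ N(MEAN, 1); the forward OU process drives it to the
# N(0, 1) prior. With unit data variance, p_t(x) = N(MEAN * exp(-t/2), 1).
MEAN, T, N_STEPS, N_SAMPLES = 2.0, 8.0, 200, 2000

def score(x, t):
    """Exact score of the forward marginal: grad log N(m_t, 1) = m_t - x."""
    return MEAN * math.exp(-t / 2.0) - x

def sample(use_sde, seed=0):
    """Integrate either the reverse-time SDE or the probability-flow ODE
    from t = T down to t = 0, starting from the N(0, 1) prior."""
    rng = random.Random(seed)
    dt = T / N_STEPS
    out = []
    for _ in range(N_SAMPLES):
        x = rng.gauss(0.0, 1.0)
        t = T
        for _ in range(N_STEPS):
            if use_sde:   # reverse SDE: dx = [f - g^2 * score] dt + g dW-bar
                drift = -x / 2.0 - score(x, t)
                x = x - drift * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
            else:         # probability-flow ODE: dx = [f - (g^2 / 2) * score] dt
                drift = -x / 2.0 - 0.5 * score(x, t)
                x = x - drift * dt
            t -= dt
        out.append(x)
    return out

ode, sde = sample(use_sde=False), sample(use_sde=True, seed=1)
m_ode, m_sde = sum(ode) / len(ode), sum(sde) / len(sde)
print(f"ODE sample mean: {m_ode:.3f}   SDE sample mean: {m_sde:.3f}")
```

With an exact score both samplers recover the data distribution; perturbing `score` at different times t (as the paper does with pulse-shape errors) is what separates their behavior.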